149 research outputs found

    Distributed averaging for accuracy prediction in networked systems

    Distributed averaging is among the most relevant cooperative control problems, with applications in sensor and robotic networks, distributed signal processing, data fusion, and load balancing. Consensus and gossip algorithms have been investigated and successfully deployed in multi-agent systems to perform distributed averaging in synchronous and asynchronous settings. This study proposes a heuristic approach to estimate the convergence rate of averaging algorithms in a distributed manner, relying on the computation and propagation of local graph metrics while requiring only simple data processing and lightweight message passing. The protocol enables nodes to predict the time (or the number of interactions) needed to estimate the global average with the desired accuracy. Consequently, nodes can make informed decisions about their use of measured and estimated data while gaining awareness of the global structure of the network, as well as of their role in it. The study presents relevant applications to outlier identification and to performance evaluation in switching topologies.
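The asynchronous gossip primitive the abstract builds on can be sketched in a few lines. This is not the paper's prediction protocol, just a minimal illustration of pairwise gossip averaging, with hypothetical sensor readings: at each step two random nodes replace their values with the pair's mean, which preserves the global sum and drives all values toward the true average.

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Asynchronous pairwise gossip: at each step two random nodes
    replace their values with the pair's average. The global sum is
    preserved, so all nodes converge to the mean of the initial values."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)  # pick a random interacting pair
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# hypothetical local sensor readings; true average is 25.0
estimates = gossip_average([10.0, 20.0, 30.0, 40.0])
```

How many rounds are "enough" for a desired accuracy is exactly the quantity the paper's heuristic lets each node predict from local graph metrics.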

    A Lossy Compression Technique Enabling Duplication-Aware Sequence Alignment

    In spite of the recognized importance of tandem duplications in genome evolution, commonly adopted sequence comparison algorithms do not take into account complex mutation events involving more than one residue at a time, since such events violate the underlying assumption of statistical independence of adjacent residues. As a consequence, the presence of tandem repeats in sequences under comparison may impair the biological significance of the resulting alignment. Although solutions have been proposed, repeat-aware sequence alignment is still considered an open problem, and new efficient and effective methods have been advocated. The present paper describes an alternative lossy compression scheme for genomic sequences which iteratively collapses repeats of increasing length. The resulting approximate representations do not contain tandem duplications, while retaining enough information to make their comparison even more significant than the edit distance between the original sequences. This allows us to apply traditional alignment algorithms directly to the compressed sequences. Results confirm the validity of the proposed approach for the problem of duplication-aware sequence alignment.
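The core idea of iteratively collapsing repeats of increasing unit length can be sketched with a simple regex-based pass. This is only an illustrative sketch, not the paper's scheme: it discards copy numbers entirely, keeping a single copy of each tandemly repeated unit.

```python
import re

def collapse_tandem_repeats(seq, max_unit=3):
    """Lossy compression sketch: collapse adjacent repeats of units of
    increasing length, e.g. 'AAAA' -> 'A' and 'ACACAC' -> 'AC'.
    Copy numbers are discarded; one copy of each unit is kept."""
    for k in range(1, max_unit + 1):
        # (.{k}) captures a candidate unit; \1+ matches adjacent extra copies
        pattern = re.compile(r"(.{%d})\1+" % k)
        prev = None
        while prev != seq:  # iterate to a fixpoint for this unit length
            prev = seq
            seq = pattern.sub(r"\1", seq)
    return seq
```

For example, `collapse_tandem_repeats("GATTACA")` removes the `TT` run, yielding a repeat-free representation that a classical aligner can then compare directly.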

    A Monte Carlo Method for Assessing the Quality of Duplication-Aware Alignment Algorithms

    The increasing availability of high-throughput sequencing technologies poses several challenges concerning the analysis of genomic data. Within this context, duplication-aware sequence alignment taking into account complex mutation events is regarded as an important problem, particularly in light of recent evolutionary bioinformatics research that highlighted the role of tandem duplications as one of the most important mutation events. Traditional sequence comparison algorithms do not take these events into account, resulting in poor alignments in terms of biological significance, mainly because of their assumption of statistical independence among contiguous residues. Several duplication-aware algorithms have been proposed in recent years, which differ either in the type of duplications they consider or in the methods adopted to identify and compare them. However, no solution clearly outperforms the others, and no methods exist for assessing the reliability of the resulting alignments. This paper proposes a Monte Carlo method for assessing the quality of duplication-aware alignment algorithms and for driving the choice of the most appropriate alignment technique to be used in a specific context.
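A generic Monte Carlo scheme for judging alignment quality can be sketched as follows. This is an assumption-laden simplification, not the paper's method: it estimates how surprising an observed distance is by comparing it against distances between randomly shuffled versions of the same sequences.

```python
import random

def edit_distance(a, b):
    """Standard dynamic-programming edit (Levenshtein) distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def monte_carlo_pvalue(a, b, trials=200, seed=0):
    """Fraction of random shuffles scoring at least as well as the
    observed pair: a small value means the alignment is unlikely
    to be a chance artifact."""
    rng = random.Random(seed)
    observed = edit_distance(a, b)
    hits = 0
    for _ in range(trials):
        sa, sb = list(a), list(b)
        rng.shuffle(sa)
        rng.shuffle(sb)
        if edit_distance("".join(sa), "".join(sb)) <= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # add-one smoothing avoids p = 0
```

The same resampling idea extends to scoring duplication-aware aligners: replace `edit_distance` with the aligner under evaluation and the shuffle with a null model that preserves the relevant sequence statistics.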

    Supporting Preemptive Multitasking in Wireless Sensor Networks

    Supporting the concurrent execution of multiple tasks on lightweight sensor nodes could enable the deployment of independent applications on a shared wireless sensor network, thus saving cost and time by exploiting infrastructures which are typically underutilized if dedicated to a single task. Existing approaches to wireless sensor network programming provide limited support for concurrency, at the cost of reducing the generality and expressiveness of the language adopted. This paper presents a Java-compatible platform for wireless sensor networks which provides thorough support for preemptive multitasking while allowing programmers to write their applications in Java. The proposed approach has been implemented and tested on top of VirtualSense, an ultra-low-power wireless sensor mote providing a Java-compatible runtime environment. Performance and scalability of the solution are discussed in light of extensive experiments performed on representative benchmarks.
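The scenario the abstract describes, independent applications time-sharing one node under a preemptive scheduler, can be illustrated with ordinary preemptive threads. The task names and periods below are invented for illustration; the paper's platform runs Java threads on a mote, not Python on a PC.

```python
import threading
import time

def sensing_task(name, period_s, log):
    """An independent application task; the scheduler preempts it so
    other tasks sharing the same node can make progress."""
    for i in range(3):
        log.append((name, i))   # stand-in for sampling a sensor
        time.sleep(period_s)    # block, yielding the node's single CPU

log = []
t1 = threading.Thread(target=sensing_task, args=("temperature", 0.01, log))
t2 = threading.Thread(target=sensing_task, args=("humidity", 0.015, log))
t1.start()
t2.start()
t1.join()
t2.join()
# log now interleaves entries from both tasks
```

The point of preemption is that neither task has to be written to cooperate with the other: the runtime interleaves them transparently.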

    Large-scale assessment of mobile crowdsensed data: a case study

    Mobile crowdsensing (MCS) is a well-established paradigm that leverages mobile devices’ ubiquitous nature and processing capabilities for large-scale data collection to monitor phenomena of common interest. Crowd-powered data collection is significantly faster and more cost-effective than traditional methods. However, it poses challenges in assessing the accuracy and extracting information from large volumes of user-generated data. SmartRoadSense (SRS) is an MCS technology that utilises sensors embedded in mobile phones to monitor the quality of road surfaces by computing a crowdsensed road roughness index (referred to as PPE). The present work performs statistical modelling of PPE to analyse its distribution across the road network and elucidate how it can be efficiently analysed and interpreted. Joint statistical analysis of open datasets is then carried out to investigate the effect of both internal and external road features on PPE. Several road properties are found to affect PPE as predicted, providing evidence that SRS can be effectively applied to assess road quality conditions. Finally, the effect of road category and speed limit on the mean and standard deviation of PPE is evaluated, incorporating previous results on the relationship between vehicle speed and PPE. These results enable more effective and confident use of the SRS platform and its data to help inform road construction and renovation decisions, especially where a lack of resources limits the use of conventional approaches. The work also exemplifies how crowdsensing technologies can benefit from open data integration and highlights the importance of making coherent, comprehensive, and well-structured open datasets available to the public.
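The grouped analysis of PPE by road feature reduces to computing per-category moments of the index. The sketch below uses invented sample values (the field names and numbers are not from the SRS dataset) to show the shape of such an analysis.

```python
from statistics import mean, stdev

# hypothetical crowdsensed samples: (road_category, speed_limit_kmh, ppe)
samples = [
    ("motorway", 130, 0.21), ("motorway", 130, 0.18),
    ("urban", 50, 0.55), ("urban", 50, 0.62), ("urban", 50, 0.47),
]

def ppe_by_category(rows):
    """Mean and standard deviation of the roughness index per road
    category, mirroring the grouped analysis described above."""
    groups = {}
    for cat, _limit, ppe in rows:
        groups.setdefault(cat, []).append(ppe)
    return {cat: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for cat, v in groups.items()}
```

Joining such per-category statistics with open datasets (road class, speed limit) is what lets internal and external road features be tested as predictors of PPE.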

    Bootstrap Based Uncertainty Propagation for Data Quality Estimation in Crowdsensing Systems

    The diffusion of mobile devices equipped with sensing, computation, and communication capabilities is opening unprecedented possibilities for high-resolution, spatio-temporal mapping of several phenomena. This novel data generation, collection, and processing paradigm, termed crowdsensing, rests on complex, distributed cyberphysical systems. Collective data gathering from heterogeneous, spatially distributed devices inherently raises the question of how to manage different quality levels of contributed data. In order to extract meaningful information, it is therefore desirable to introduce effective methods for evaluating data quality. In this paper, we propose an approach aimed at systematic accuracy estimation of quantities provided by end-user devices of a crowd-based sensing system. This is obtained thanks to the combination of statistical bootstrap with uncertainty propagation techniques, leading to a consistent and technically sound methodology. Uncertainty propagation provides a formal framework for combining uncertainties resulting from the different quantities influencing a given measurement activity. Statistical bootstrap enables the characterization of the sampling distribution of a given statistic without any prior assumption about the distributions underlying the data generation process. The proposed approach is evaluated on synthetic benchmarks and on a real-world case study. Cross-validation experiments show that confidence intervals computed with the presented technique differ by at most 1.5% from the interval widths obtained with controlled standard Monte Carlo methods, under a wide range of operating conditions. In general, experimental results confirm the suitability and validity of the introduced methodology.
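The bootstrap half of the methodology can be sketched directly. This is a plain percentile bootstrap, a minimal sketch rather than the paper's full pipeline: resample the contributed data with replacement, recompute the statistic, and take empirical quantiles, with no distributional assumption.

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary
    statistic, assuming nothing about the underlying distribution."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data)))  # resample w/ replacement
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Feeding intervals like these into an uncertainty-propagation step is what lets per-device accuracies be combined into an overall data-quality estimate.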

    Decentralising the Internet of Medical Things with Distributed Ledger Technologies and Off-Chain Storages: a Proof of Concept

    Privacy concerns limit the Internet of Medical Things, even though shared medical information would enable new medical studies, new treatments, and new digital health technologies. Solving the sharing issue would have a triple impact: handling sensitive information easily, contributing to international medical advancements, and enabling personalised care. A possible solution is to decentralise the notion of privacy, placing it directly in the hands of users. Solutions enabling this vision are closely linked to Distributed Ledger Technologies, whose immutability and transparency allow privacy-compliant solutions in contexts where privacy is the primary need. This work lays the foundations for a system that can provide adequate security in terms of privacy while allowing the sharing of information between participants. We introduce an Internet of Medical Things application use case called “Balance”, networks of trusted peers to manage sensitive data access called “Halo”, and, finally, Smart Contracts to safeguard third-party rights over data. This architecture should enable the theoretical vision of privacy-based healthcare solutions running in a decentralised manner.

    Direct Measures of Path Delays on Commercial FPGA Chips

    We present a general technique for measuring the propagation delay on the internal wires of FPGA chips. The measure is based on the comparison between the operating frequencies of two ring oscillators that differ only in the structure under test, which is included in one loop but not in the other. Experimental results are presented for a device of the Xilinx XC4000 family.
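The arithmetic behind the comparison is simple: a ring oscillator's period is twice its total loop delay (the transition travels the loop twice per period), so the delay of the structure under test is the half-period difference between the two oscillators. The frequencies below are invented for illustration, not measurements from the paper.

```python
def path_delay(f_ref_hz, f_dut_hz):
    """Delay (seconds) of the structure under test, from the frequencies
    of two ring oscillators that differ only by its inclusion in the
    loop. Each oscillator's period equals twice its total loop delay."""
    return 0.5 / f_dut_hz - 0.5 / f_ref_hz

# hypothetical readings: 50 MHz without the structure, 45 MHz with it
delay = path_delay(50e6, 45e6)  # about 1.1 ns of extra loop delay
```

Because both oscillators share every element except the structure under test, systematic contributions (buffers, routing common to both loops) cancel in the subtraction.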

    Discovering Lexical Similarity Using Articulatory Feature-Based Phonetic Edit Distance

    Lexical Similarity (LS) between two languages uncovers many interesting linguistic insights such as phylogenetic relationship, mutual intelligibility, common etymology, and loan words. There are various methods through which LS is evaluated. This paper presents a method of Phonetic Edit Distance (PED) that uses a soft comparison of letters based on the articulatory features associated with their International Phonetic Alphabet (IPA) transcription. In particular, the comparison between the articulatory features of two letters taken from words belonging to different languages is used to compute the cost of replacement in the inner loop of the edit distance computation. As an example, PED gives an edit distance of 0.82 between the German word ‘vater’ ([fa:tər]) and the Persian word ‘ ’ ([pedær]), meaning ‘father,’ and, similarly, a PED of 0.93 between the Hebrew word ‘ ’ ([ʃəɭam]) and the Arabic word ‘ ’ ([səɭa:m]), meaning ‘peace,’ whereas the classical edit distances would be 4 and 2, respectively. We report the results of systematic experiments conducted on six languages: Arabic, Hindi, Marathi, Persian, Sanskrit, and Urdu. Universal Dependencies (UD) corpora were used to restrict the comparison to lists of words belonging to the same part of speech. The LS based on the average PED between pairs of words was then computed for each pair of languages, unveiling similarities otherwise masked by the adoption of different alphabets, grammars, and pronunciation rules.
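The mechanism of grading the substitution cost by articulatory features slots directly into the classical edit-distance recurrence. The feature inventory below is a toy stand-in (the paper's exact feature set and costs are not reproduced here), and the transcriptions are flattened to ASCII for illustration.

```python
# toy articulatory feature sets for a few sounds (illustrative only)
FEATURES = {
    "p": {"bilabial", "plosive", "voiceless"},
    "f": {"labiodental", "fricative", "voiceless"},
    "t": {"alveolar", "plosive", "voiceless"},
    "d": {"alveolar", "plosive", "voiced"},
    "r": {"alveolar", "trill", "voiced"},
    "a": {"open", "front", "vowel"},
    "e": {"mid", "front", "vowel"},
}

def sub_cost(x, y):
    """Soft replacement cost: the fraction of articulatory features on
    which the two sounds disagree (Jaccard distance), in [0, 1]."""
    if x == y:
        return 0.0
    fx, fy = FEATURES[x], FEATURES[y]
    return 1.0 - len(fx & fy) / len(fx | fy)

def phonetic_edit_distance(a, b):
    """Edit distance whose substitution cost is graded by feature
    similarity instead of the classical all-or-nothing cost of 1."""
    prev = [float(j) for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [float(i)]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                      # deletion
                           cur[j - 1] + 1,                   # insertion
                           prev[j - 1] + sub_cost(ca, cb)))  # soft substitution
        prev = cur
    return prev[-1]
```

With this grading, a phonetically close pair like the flattened 'fater'/'peder' scores well below its classical distance of 3, which is exactly the effect the paper exploits across alphabets.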